Point cloud registration (PCR) is a popular research topic in computer vision. Recently, evolving registration methods have received continuous attention because of their robustness to the initial pose and flexibility in objective function design. However, most evolving registration methods cannot tackle local optima well, and they have rarely investigated the success ratio, which reflects the probability of not falling into local optima and is closely related to the practicality of the algorithm. Evolutionary multi-task optimization (EMTO) is a widely used paradigm that can boost exploration capability through knowledge transfer among related tasks. Inspired by this concept, this study proposes a novel evolving registration algorithm via EMTO, where the multi-task configuration is based on the idea of solution space cutting. Concretely, one task searching in the cut space assists another task with a complex function landscape in escaping from local optima and enhancing the successful registration ratio. To reduce unnecessary computational cost, a sparse-to-dense strategy is proposed. In addition, a novel fitness function robust to various overlap rates as well as a problem-specific metric of computational cost is introduced. Compared with 7 evolving registration approaches and 4 traditional registration approaches on object-scale and scene-scale registration datasets, the proposed method demonstrates superior performance in terms of precision and the ability to escape local optima.
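To make the evolutionary formulation concrete, the following is a minimal sketch of how one candidate rigid transform might be scored during the search, assuming a trimmed nearest-neighbour residual as the fitness; the parameterization, the trimming ratio, and all names are illustrative assumptions rather than the paper's exact objective. In an EMTO setting, the same evaluator could be shared by the original task and an auxiliary task searching a cut (reduced) solution space.

```python
import numpy as np
from scipy.spatial import cKDTree

def se3_from_params(params):
    """Build a rigid transform from (rx, ry, rz, tx, ty, tz): Euler angles plus translation."""
    rx, ry, rz, tx, ty, tz = params
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx, np.array([tx, ty, tz])

def registration_fitness(params, source, target_tree, trim_ratio=0.6):
    """Trimmed mean of nearest-neighbour distances (lower is better); trimming keeps
    only the closest fraction of points so partial overlap does not dominate the score."""
    R, t = se3_from_params(params)
    moved = source @ R.T + t
    dists, _ = target_tree.query(moved, k=1)
    keep = int(len(dists) * trim_ratio)
    return np.sort(dists)[:keep].mean()

# Evaluating one individual of an evolutionary population on synthetic data.
source = np.random.rand(500, 3)
target = np.random.rand(800, 3)
fitness = registration_fitness(np.zeros(6), source, cKDTree(target))
```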
Structured pruning techniques have achieved excellent compression performance on convolutional neural networks for image classification tasks. However, most existing methods are weight-oriented, and their pruning results may be unsatisfactory when the original model is poorly trained. That is, a fully trained model is required to provide useful weight information. This can be time-consuming, and the pruning results are sensitive to the update process of the model parameters. In this paper, we propose a metric named Average Filter Information Entropy (AFIE) to measure the importance of each filter. It is computed in three main steps: low-rank decomposition of the "input-output" matrix of each convolutional layer, normalization of the obtained eigenvalues, and calculation of filter importance based on information entropy. By leveraging the proposed AFIE, the framework can yield a stable importance evaluation for each filter regardless of whether the original model is fully trained. We implement AFIE on AlexNet, VGG-16, and ResNet-50, and test them on MNIST, CIFAR-10, and ImageNet, respectively. The experimental results are encouraging. We surprisingly observe that, for our method, the importance evaluation of each filter remains the same as the fully trained result even when the original model is trained for only one epoch. This indicates that the proposed pruning strategy can be performed effectively at the very beginning of the original model's training process.
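The three steps above can be sketched roughly as follows, assuming the "input-output" matrix is the layer's flattened weight tensor, the low-rank decomposition is realized with an SVD, and singular values stand in for the eigenvalues; this is a simplified illustration, not the authors' exact computation.

```python
import numpy as np

def afie_scores(weight):
    """Average Filter Information Entropy sketch for one convolutional layer.
    weight: array of shape (out_channels, in_channels, kH, kW)."""
    out_ch = weight.shape[0]
    # Step 1: low-rank decomposition of the flattened input-output matrix.
    mat = weight.reshape(out_ch, -1)
    s = np.linalg.svd(mat, compute_uv=False)
    # Step 2: normalize the obtained spectrum into a probability distribution.
    p = s / (s.sum() + 1e-12)
    # Step 3: information entropy of that distribution; every filter in the layer
    # receives the same averaged entropy-based importance in this sketch.
    entropy = -(p * np.log(p + 1e-12)).sum()
    return np.full(out_ch, entropy / out_ch)

layer_importance = afie_scores(np.random.randn(64, 3, 3, 3))
```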
Pruning techniques are widely used to compress convolutional neural networks (CNNs) for image classification. However, most pruning methods require a well-trained model to provide useful supporting parameters, such as the $\ell_1$-norm, BatchNorm values, and gradient information, which may lead to inconsistency in filter evaluation if the parameters of the pre-trained model are not well optimized. Therefore, we propose a sensitivity-based method that evaluates the importance of each layer by adding extra damage to the original model. Because accuracy depends on the distribution of parameters across all layers rather than on individual parameters, the sensitivity-based method is robust to parameter updates. In other words, we can obtain similar importance evaluations for each convolutional layer between imperfectly trained and fully trained models. For VGG-16 on CIFAR-10, we obtain the same evaluation of layer importance as that of the fully trained model even when the original model is trained for only 50 epochs. We then remove filters from each layer according to the quantified sensitivity. Our sensitivity-based pruning framework is validated to be effective on VGG-16 with CIFAR-10, MNIST, and CIFAR-100, respectively.
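A minimal sketch of the "extra damage" idea might look like the following: each convolutional layer is perturbed with noise in turn, and its sensitivity is read off as the resulting accuracy drop. The noise scale, the evaluation callback, and the pruning decision rule are all assumptions made for illustration.

```python
import copy
import torch

@torch.no_grad()
def layer_sensitivity(model, conv_names, eval_fn, noise_scale=0.1):
    """Estimate per-layer sensitivity as the accuracy drop when that layer alone
    is damaged with Gaussian noise. eval_fn(model) must return an accuracy."""
    base_acc = eval_fn(model)
    sensitivity = {}
    for name in conv_names:
        damaged = copy.deepcopy(model)
        weight = dict(damaged.named_parameters())[name + ".weight"]
        weight.add_(noise_scale * weight.std() * torch.randn_like(weight))
        sensitivity[name] = base_acc - eval_fn(damaged)
    # Layers with small accuracy drops are less sensitive and can be pruned more heavily.
    return sensitivity
```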
Multi-view point cloud registration is fundamental in 3D reconstruction. Since there are close connections between point clouds captured from different viewpoints, registration performance can be enhanced if these connections are exploited properly. Therefore, this paper models the registration problem as multi-task optimization and proposes a novel bi-channel knowledge sharing mechanism to solve it effectively and efficiently. The modeling of multi-view point cloud registration as multi-task optimization is twofold. By simultaneously considering the local accuracy of two point clouds and the global consistency brought by all the point clouds involved, a fitness function with an adaptive threshold is derived. A framework for the co-evolutionary search process is also defined to simultaneously optimize multiple fitness functions belonging to related tasks. To improve solution quality and convergence speed, the proposed bi-channel knowledge sharing mechanism comes into play. Intra-task knowledge sharing introduces an auxiliary task that is easier to solve, and useful information is shared between the auxiliary task and the original task, thereby accelerating the search process. Inter-task knowledge sharing explores commonalities buried in the original tasks, aiming to prevent tasks from getting stuck in local optima. Comprehensive experiments on model objects as well as scene point clouds show the efficacy of the proposed method.
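One way to read the fitness design described above, sketched under simplifying assumptions: a local term for the view pair being aligned plus a global cycle-consistency term over all views, with the adaptive threshold replaced by a fixed inlier radius for brevity. All names and weights below are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_error(R, t, src, dst_tree, inlier_radius=0.05):
    """Mean distance over inlier correspondences for one view pair."""
    d, _ = dst_tree.query(src @ R.T + t, k=1)
    inliers = d[d < inlier_radius]
    return inliers.mean() if len(inliers) else inlier_radius

def global_consistency(loop_poses):
    """Penalize drift when chaining poses around a closed loop of views.
    loop_poses: list of (R, t) mapping view i into view i+1, the last closing the loop."""
    R_acc, t_acc = np.eye(3), np.zeros(3)
    for R, t in loop_poses:
        R_acc, t_acc = R @ R_acc, R @ t_acc + t
    return np.linalg.norm(R_acc - np.eye(3)) + np.linalg.norm(t_acc)

def pair_fitness(R, t, src, dst_tree, loop_poses, weight=0.1):
    """Local accuracy of the current pair plus global consistency over all views."""
    return local_error(R, t, src, dst_tree) + weight * global_consistency(loop_poses)
```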
Recently, mutual information maximization has emerged as a powerful approach for unsupervised graph representation learning. Existing methods usually capture information effectively from the topology view but ignore the feature view. To circumvent this problem, we propose a novel method based on mutual information maximization that exploits both the feature and topology views. Specifically, we first utilize a multi-view representation learning module to better capture both local and global information content across the feature and topology views of the graph. To model the information shared by the feature and topology spaces, we develop a common representation learning module using mutual information maximization and reconstruction loss minimization. To explicitly encourage diversity between graph representations within the same view, we also introduce a disagreement regularization to enlarge the distance between representations of the same view. Experiments on synthetic and real-world datasets demonstrate the effectiveness of integrating the feature and topology views. In particular, compared with previous supervised methods, our proposed method achieves comparable or even better performance under the unsupervised representation and linear evaluation protocols.
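A condensed sketch of how the three loss terms might be combined is shown below, with an InfoNCE-style estimator playing the role of the mutual-information term; the module outputs, the weighting coefficients, and the exact form of the disagreement regularization are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """Simple InfoNCE surrogate for mutual information between two embedding batches."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def total_loss(feat_view, topo_view, common, recon, target, alpha=1.0, beta=0.1):
    # Mutual information maximization between each view and the common representation.
    mi = info_nce(feat_view, common) + info_nce(topo_view, common)
    # Reconstruction loss for the common representation learning module.
    rec = F.mse_loss(recon, target)
    # Disagreement regularization: enlarge the distance between the view-specific
    # representations (a simplification of the regularizer described above).
    disagree = -F.mse_loss(feat_view, topo_view)
    return mi + alpha * rec + beta * disagree
```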
Brain midline shift (MLS) is one of the most critical factors considered in clinical diagnosis and treatment decision-making for intracranial hemorrhage. Existing computational methods for MLS quantification not only require intensive labeling at millimeter-level precision but also suffer from poor performance due to their dependence on specific landmarks or simplified anatomical assumptions. In this paper, we propose a novel semi-supervised framework to accurately measure the scale of MLS from head CT scans. We formulate the MLS measurement task as a deformation estimation problem and solve it using a few MLS slices with sparse labels. Meanwhile, with the help of diffusion models, we are able to use a large number of unlabeled MLS data and 2793 non-MLS cases for representation learning and regularization. The extracted representation reflects how the image differs from a non-MLS image, and the regularization plays an important role in the sparse-to-dense refinement of the deformation field. Our method achieves state-of-the-art performance on a real clinical brain hemorrhage dataset and can generate interpretable deformation fields.
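As a rough illustration of how an MLS scale could be read off an estimated deformation field (not the paper's actual post-processing), one could take the largest horizontal displacement along the ideal midline column; the field layout and pixel spacing below are placeholders.

```python
import numpy as np

def mls_from_deformation(deformation_x, midline_col, pixel_spacing_mm):
    """deformation_x: (H, W) horizontal displacement field in pixels;
    midline_col: column index of the ideal (undeformed) midline.
    Returns the midline shift in millimetres as the largest displacement along that column."""
    shift_px = np.abs(deformation_x[:, midline_col]).max()
    return shift_px * pixel_spacing_mm

mls_mm = mls_from_deformation(np.zeros((512, 512)), midline_col=256, pixel_spacing_mm=0.45)
```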
Current mainstream object detection methods for large aerial images usually divide large images into patches and then exhaustively detect the objects of interest on all patches, regardless of whether any objects are present. This paradigm, although effective, is inefficient because the detectors have to go through all patches, severely hindering the inference speed. This paper presents an Objectness Activation Network (OAN) that helps detectors focus on fewer patches while achieving more efficient inference and more accurate results, enabling a simple and effective solution to object detection in large images. In brief, OAN is a light fully-convolutional network for judging whether each patch contains objects, which can be easily integrated into many object detectors and jointly trained with them end-to-end. We extensively evaluate our OAN with five advanced detectors. Using OAN, all five detectors achieve more than a 30.0% speed-up on three large-scale aerial image datasets, with consistent accuracy improvements. On extremely large Gaofen-2 images (29200$\times$27620 pixels), our OAN improves the detection speed by 70.5%. Moreover, we extend our OAN to driving-scene object detection and 4K video object detection, boosting the detection speed by 112.1% and 75.0%, respectively, without sacrificing accuracy. Code is available at https://github.com/Ranchosky/OAN.
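A minimal sketch of the patch-gating idea follows, assuming OAN is a small fully-convolutional scorer that outputs one objectness logit per spatial cell and that patches whose pooled score falls below a threshold are skipped by the downstream detector; the architecture and threshold are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn

class TinyObjectnessNet(nn.Module):
    """Light fully-convolutional scorer: one objectness logit per spatial cell."""
    def __init__(self, in_ch=3, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 1),
        )

    def forward(self, x):
        return self.body(x)  # (B, 1, H/4, W/4) objectness logits

def select_patches(patches, scorer, threshold=0.5):
    """Keep only patches whose maximum objectness probability exceeds the threshold."""
    with torch.no_grad():
        probs = torch.sigmoid(scorer(patches)).amax(dim=(2, 3)).squeeze(1)
    return patches[probs > threshold]

patches = torch.rand(16, 3, 256, 256)  # patches cropped from one large aerial image
kept = select_patches(patches, TinyObjectnessNet())  # only these go to the detector
```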
We study the problem of semantic segmentation calibration. For image classification, many solutions have been proposed to alleviate confidence miscalibration. However, to date, confidence calibration research on semantic segmentation is still limited. We provide a systematic study on the calibration of semantic segmentation models and propose a simple yet effective approach. First, we find that model capacity, crop size, multi-scale testing, and prediction correctness all affect calibration. Among them, prediction correctness, especially misprediction, contributes most to miscalibration due to over-confidence. Next, we propose a simple, unifying, and effective approach, namely selective scaling, which separates correct and incorrect predictions for scaling and focuses more on smoothing the logits of mispredictions. Then, we study popular existing calibration methods and compare them with selective scaling on semantic segmentation calibration. We conduct extensive experiments on a variety of benchmarks covering both in-domain and domain-shift calibration, and show that selective scaling consistently outperforms other methods.
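A compact sketch of selective scaling at inference time, assuming the correct/incorrect split is provided by some selector (e.g. an auxiliary classifier) and that flagged mispredictions receive a larger smoothing temperature; both temperatures and the selector are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def selective_scaling(logits, is_misprediction, t_correct=1.0, t_incorrect=2.0):
    """Apply a larger temperature to pixels flagged as likely mispredicted,
    smoothing their logits and reducing over-confidence.
    logits: (N, C); is_misprediction: (N,) boolean mask."""
    temps = torch.where(is_misprediction,
                        logits.new_tensor(t_incorrect),
                        logits.new_tensor(t_correct))
    return F.softmax(logits / temps.unsqueeze(1), dim=1)

probs = selective_scaling(torch.randn(10, 19), torch.rand(10) > 0.7)
```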
In this paper, we propose a large-scale language pre-training model for text GENeration using dIffusion modEl, named GENIE. GENIE is a pre-trained sequence-to-sequence text generation model that combines Transformer and diffusion. The diffusion model accepts latent information from the encoder, which is used to guide the denoising of the current time step. After multiple such denoising iterations, the diffusion model restores the Gaussian noise to diverse output text controlled by the input text. Moreover, this architectural design also allows us to adopt large-scale pre-training for GENIE. We propose a novel pre-training method named continuous paragraph denoise based on the characteristics of the diffusion model. Extensive experiments on the XSum, CNN/DailyMail, and Gigaword benchmarks show that GENIE achieves comparable performance with various strong baselines; in particular, after pre-training, the generation quality of GENIE is greatly improved. We also conduct extensive experiments on the generation diversity and parameter impact of GENIE. The code for GENIE will be made publicly available.
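A heavily simplified skeleton of the encoder-conditioned denoising step is sketched below to make the architecture concrete; every module, dimension, and parameterization choice here is a placeholder and not GENIE's actual implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionDenoiser(nn.Module):
    """One denoising step: the noisy latent attends to the source-text encoder states."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.time_mlp = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.out = nn.Linear(dim, dim)

    def forward(self, noisy_latent, encoder_states, t):
        # Inject the diffusion time step, then condition on the encoder output.
        h = noisy_latent + self.time_mlp(t.view(-1, 1, 1).float())
        h, _ = self.attn(h, encoder_states, encoder_states)
        return self.out(h)  # predicted noise (or x0), depending on the parameterization

denoiser = CrossAttentionDenoiser()
pred = denoiser(torch.randn(2, 64, 512), torch.randn(2, 128, 512), torch.tensor([10, 500]))
```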
Developing autonomous vehicles (AVs) helps improve the road safety and traffic efficiency of intelligent transportation systems (ITS). Accurately predicting the trajectories of traffic participants is essential to the decision-making and motion planning of AVs in interactive scenarios. Recently, learning-based trajectory predictors have shown state-of-the-art performance in highway and urban areas. However, most existing learning-based models trained with fixed datasets may perform poorly in continuously changing scenarios. Specifically, they may not perform well in previously learned scenarios after learning a new one. This phenomenon is called "catastrophic forgetting". Few studies investigate trajectory prediction in continuous scenarios, where catastrophic forgetting may happen. To handle this problem, we first propose a novel continual learning (CL) approach for vehicle trajectory prediction. Then, inspired by brain science, a dynamic memory mechanism is developed by measuring the traffic divergence between scenarios, which balances the performance and training efficiency of the proposed CL approach. Finally, datasets collected from different locations are used to design continual training and testing methods in experiments. Experimental results show that the proposed approach achieves consistently high prediction accuracy in continuous scenarios without re-training, which mitigates catastrophic forgetting compared to non-CL approaches. The implementation of the proposed approach is publicly available at https://github.com/BIT-Jack/D-GSM
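A small sketch of how a divergence-weighted replay memory could allocate slots across scenarios, using a symmetric KL divergence between per-scenario feature statistics as the "traffic divergence"; the statistic, the Gaussian assumption, and the allocation rule are illustrative rather than the paper's exact design.

```python
import numpy as np

def sym_kl_gaussian(mu1, var1, mu2, var2):
    """Symmetric KL divergence between two diagonal Gaussians (per-scenario feature stats)."""
    kl = lambda m1, v1, m2, v2: 0.5 * np.sum(np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1)
    return kl(mu1, var1, mu2, var2) + kl(mu2, var2, mu1, var1)

def memory_quota(scenario_stats, new_stats, total_slots=1000):
    """Give more replay slots to past scenarios that diverge most from the new one,
    so the most dissimilar driving patterns are least likely to be forgotten."""
    divs = np.array([sym_kl_gaussian(mu, var, *new_stats) for mu, var in scenario_stats])
    weights = divs / divs.sum() if divs.sum() > 0 else np.full(len(divs), 1.0 / len(divs))
    return (weights * total_slots).astype(int)

stats = [(np.zeros(8), np.ones(8)), (np.ones(8), np.ones(8))]
slots = memory_quota(stats, (0.5 * np.ones(8), np.ones(8)))
```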